Bayesian Unsupervised Learning for Source Separation with Mixture of Gaussians Prior

Authors

  • Hichem Snoussi
  • Ali Mohammad-Djafari
Abstract

This paper considers the problem of source separation in the case of noisy instantaneous mixtures. In previous work [1], sources were modeled by a mixture of Gaussians, leading to a hierarchical Bayesian model in which the labels of the mixture are treated as i.i.d. hidden variables. We extend this model to incorporate a Markovian structure for the labels. This extension is important for practical applications, which are abundant: unsupervised classification and segmentation, pattern recognition, and speech signal processing. In order to estimate the mixing matrix and the a priori model parameters, we treat the observations as incomplete data. The missing data are the sources and the labels: the sources are missing data for the observations, and the labels are missing data for the sources. This hierarchical model leads to specific restoration-maximization algorithms. The restoration step can be carried out in three different ways: (i) the complete likelihood is estimated by its conditional expectation, which leads to the EM (expectation-maximization) algorithm [2]; (ii) the missing data are estimated by their maximum a posteriori, which leads to the JMAP (joint maximum a posteriori) algorithm [3]; (iii) the missing data are sampled from their a posteriori distributions, which leads to the SEM (stochastic EM) algorithm [4]. A Gibbs sampling scheme is implemented to generate the missing data. We have also introduced a relaxation strategy into these algorithms to reduce the computational cost, which grows exponentially with the number of source components and the number of Gaussian mixture components.
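To make the restoration-maximization idea concrete, the following is a minimal SEM-style sketch for a noisy instantaneous mixture x(t) = A s(t) + n(t) with mixture-of-Gaussians source priors. The restoration step samples the labels from their posterior and then restores the sources from their (Gaussian, conjugate) posterior; the maximization step re-estimates the mixing matrix and mixture parameters. All numerical values, variable names, and the shared-component simplification are illustrative assumptions, not the paper's exact algorithm (in particular, the i.i.d. label model is used here, without the Markovian extension or the relaxation strategy).

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Synthetic noisy instantaneous mixture: x(t) = A s(t) + n(t) ---
# Each source sample is drawn from a 2-component mixture of Gaussians
# (hidden labels z). All parameter values below are illustrative.
T, n_src, n_obs, K = 500, 2, 2, 2
mu_true = np.array([-2.0, 2.0])            # component means (shared by sources)
var_true = np.array([0.3, 0.3])            # component variances
A_true = np.array([[1.0, 0.6], [0.5, 1.0]])
sigma_n = 0.1

z_true = rng.integers(0, K, size=(n_src, T))
S_true = mu_true[z_true] + np.sqrt(var_true[z_true]) * rng.standard_normal((n_src, T))
X = A_true @ S_true + sigma_n * rng.standard_normal((n_obs, T))

# --- SEM-style restoration-maximization (simplified sketch) ---
A = np.eye(n_obs, n_src)                   # crude initial mixing matrix
mu, var, pi = mu_true.copy(), var_true.copy(), np.full(K, 0.5)
S_hat = np.linalg.lstsq(A, X, rcond=None)[0]

for it in range(30):
    # Restoration 1: sample labels from their posterior given current sources.
    logp = (np.log(pi)[None, None, :]
            - 0.5 * (S_hat[:, :, None] - mu[None, None, :]) ** 2 / var
            - 0.5 * np.log(var))
    p = np.exp(logp - logp.max(axis=2, keepdims=True))
    p /= p.sum(axis=2, keepdims=True)
    z_hat = np.array([[rng.choice(K, p=p[i, t]) for t in range(T)]
                      for i in range(n_src)])

    # Restoration 2: given the labels, the source posterior is Gaussian
    # (Gaussian prior per label, Gaussian noise => conjugate update).
    for t in range(T):
        prior_prec = np.diag(1.0 / var[z_hat[:, t]])
        Lam = A.T @ A / sigma_n**2 + prior_prec
        rhs = A.T @ X[:, t] / sigma_n**2 + mu[z_hat[:, t]] / var[z_hat[:, t]]
        S_hat[:, t] = np.linalg.solve(Lam, rhs)

    # Maximization: least-squares mixing matrix, empirical mixture parameters.
    A = X @ S_hat.T @ np.linalg.inv(S_hat @ S_hat.T)
    for k in range(K):
        mask = (z_hat == k)
        if mask.any():
            pi[k] = mask.mean()
            mu[k] = S_hat[mask].mean()
            var[k] = S_hat[mask].var() + 1e-6
```

Replacing the label-sampling step by a posterior expectation would give the EM variant, and replacing it by the posterior mode would give the JMAP variant mentioned in the abstract.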


Related works

Bayesian source separation with mixture of Gaussians prior for sources and Gaussian prior for mixture coefficients

Abstract. In this contribution, we present new algorithms for source separation in the case of a noisy instantaneous linear mixture, within the Bayesian statistical framework. The source distribution prior is modeled by a mixture of Gaussians [1] and the distributions of the mixing matrix elements by a Gaussian [2]. We model the mixture of Gaussians hierarchically by means of hidden variables representin...


Underdetermined Model-Based Blind Source Separation of Reverberant Speech Mixtures using Spatial Cues in a Variational Bayesian Framework

In this paper, we propose a new method for underdetermined blind source separation of reverberant speech mixtures by classifying each time-frequency (T-F) point of the mixtures according to a combined variational Bayesian model of spatial cues, under a sparse signal representation assumption. We model the T-F observations by a variational mixture of circularly-symmetric complex Gaussians. The spa...


Learning mixtures of Gaussians with maximum-a-posteriori oracle

We consider the problem of estimating the parameters of a mixture of distributions, where each component distribution is from a given parametric family, e.g. exponential, Gaussian, etc. We define a learning model in which the learner has access to a "maximum-a-posteriori" oracle which, given any sample from a mixture of distributions, tells the learner which component distribution was the most lik...


Bayesian Nonlinear Independent Component Analysis by Multi-Layer Perceptrons

In this chapter, a nonlinear extension to independent component analysis is developed. The nonlinear mapping from source signals to observations is modelled by a multi-layer perceptron network and the distributions of source signals are modelled by mixture-of-Gaussians. The observations are assumed to be corrupted by Gaussian noise and therefore the method is more adequately described as nonlin...


Variational Gaussian mixtures for blind source detection

Bayesian algorithms have lately been used in a large variety of applications. This paper proposes a new methodology for hyperparameter initialization in the Variational Bayes (VB) algorithm. We employ a dual expectation-maximization (EM) algorithm as the initialization stage in the VB-based learning. In the first stage, the EM algorithm is used on the given data set while the second EM algorithm...



Journal:
  • VLSI Signal Processing

Volume 37, Issue: -

Pages: -

Publication date: 2004